Copenhagen


AI can spontaneously develop human-like communication, study finds

The Guardian

Artificial intelligence can spontaneously develop human-like social conventions, a study has found. The research, undertaken in collaboration between City St George's, University of London and the IT University of Copenhagen, suggests that when large language model (LLM) AI agents such as ChatGPT communicate in groups without outside involvement, they can begin to adopt linguistic forms and social norms in the same way that humans do when they socialise. The study's lead author, Ariel Flint Ashery, a doctoral researcher at City St George's, said the group's work went against the majority of research done into AI, as it treated AI as a social rather than solitary entity. "Most research so far has treated LLMs in isolation but real-world AI systems will increasingly involve many interacting agents," said Ashery. "We wanted to know: can these models coordinate their behaviour by forming conventions, the building blocks of a society? The answer is yes, and what they do together can't be reduced to what they do alone."


Kermut: Composite kernel regression for protein variant effects

Lars Olsen, Jesper Salomon, Wouter Boomsma (University of Copenhagen and Novonesis)

Neural Information Processing Systems

Reliable prediction of protein variant effects is crucial for both protein optimization and for advancing biological understanding. For practical use in protein engineering, it is important that we can also provide reliable uncertainty estimates for our predictions, and while prediction accuracy has seen much progress in recent years, uncertainty metrics are rarely reported. We here provide a Gaussian process regression model, Kermut, with a novel composite kernel for modeling mutation similarity, which obtains state-of-the-art performance for supervised protein variant effect prediction while also offering estimates of uncertainty through its posterior. An analysis of the quality of the uncertainty estimates demonstrates that our model provides meaningful levels of overall calibration, but that instance-specific uncertainty calibration remains more challenging.


The Good Robot podcast: Re-imagining voice assistants with Stina Hasse Jørgensen and Frederik Juutilainen

AIHub

Hosted by Eleanor Drage and Kerry McInerney, The Good Robot is a podcast which explores the many complex intersections between gender, feminism and technology. To develop voice assistants like Siri and Alexa, companies spend years investigating what sounds like a human voice and what doesn't. But what we've ended up with is just one possibility of the kinds of voices that we could be interacting with. In this episode, we talked to sound engineer Frederik Juutilainen, and assistant professor at the University of Copenhagen, Stina Hasse Jørgensen, about their participation in [multi'vocal], an experimental research project that created an alternative voice assistant by asking people at a rock festival in Denmark to speak into a portable recording box. We talk about voice assistants' inability to stutter, lisp and code switch, and whether a voice can express multiple personalities, genders and ages.


These two new AI benchmarks could help make models less biased

MIT Technology Review

"When we are focused on treating everybody exactly the same, it can be overly stringent," says Angelina Wang, a postdoc at the Stanford Institute for Human-Centered AI and RegLab, who is the lead author of the paper. "It's forcing people to be treated the same even when there are legitimate differences." Ignoring differences between groups may in fact make AI systems less fair. "Sometimes being able to differentiate between groups is actually useful to treat the people from different groups more fairly," says Isabelle Augenstein, a computer science professor at the University of Copenhagen, who was not involved in the research. Wang and her colleagues created eight new benchmarks to evaluate AI systems along two different dimensions that the team devised: difference awareness and contextual awareness.


Open-Set Recognition of Novel Species in Biodiversity Monitoring

arXiv.org Artificial Intelligence

Machine learning is increasingly being applied to facilitate long-term, large-scale biodiversity monitoring. With most species on Earth still undiscovered or poorly documented, species-recognition models are expected to encounter new species during deployment. We introduce Open-Insects, a fine-grained image recognition benchmark dataset for open-set recognition and out-of-distribution detection in biodiversity monitoring. Open-Insects makes it possible to evaluate algorithms for new species detection on several geographical open-set splits with varying difficulty. Furthermore, we present a test set recently collected in the wild with 59 species that are likely new to science. We evaluate a variety of open-set recognition algorithms, including post-hoc methods, training-time regularization, and training with auxiliary data, finding that the simple post-hoc approach of utilizing softmax scores remains a strong baseline. We also demonstrate how to leverage auxiliary data to improve the detection performance when the training dataset is limited. Our results provide timely insights to guide the development of computer vision methods for biodiversity monitoring and species discovery.
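The strong post-hoc baseline the abstract mentions, maximum softmax probability, is simple enough to sketch in a few lines: score each image by its top softmax probability and flag low-confidence inputs as candidate novel species. The threshold below is a hypothetical operating point; in practice it would be tuned on a validation split of known species:

```python
import numpy as np

def max_softmax_score(logits):
    # Maximum softmax probability (MSP): a low top-class probability
    # suggests the input may be a species outside the training set.
    z = logits - logits.max(axis=1, keepdims=True)   # numerical stability
    probs = np.exp(z) / np.exp(z).sum(axis=1, keepdims=True)
    return probs.max(axis=1)

def flag_novel(logits, threshold=0.5):
    # threshold=0.5 is an invented operating point for illustration.
    return max_softmax_score(logits) < threshold

logits = np.array([
    [9.0, 0.5, 0.2],   # confident: looks like a known species
    [0.2, 0.1, 0.15],  # near-uniform: candidate novel species
])
flags = flag_novel(logits)
```

Its appeal as a baseline is that it needs no retraining and no auxiliary data, only the classifier's existing outputs.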


Spectral Theory for Edge Pruning in Asynchronous Recurrent Graph Neural Networks

arXiv.org Artificial Intelligence

Graph Neural Networks (GNNs) have emerged as a powerful tool for learning on graph-structured data, finding applications in numerous domains including social network analysis and molecular biology. Within this broad category, Asynchronous Recurrent Graph Neural Networks (ARGNNs) stand out for their ability to capture complex dependencies in dynamic graphs, resembling living organisms' intricate and adaptive nature. However, their complexity often leads to large and computationally expensive models. Therefore, pruning unnecessary edges becomes crucial for enhancing efficiency without significantly compromising performance. This paper presents a dynamic pruning method based on graph spectral theory, leveraging the imaginary component of the eigenvalues of the network graph's Laplacian.
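The quantity the paper builds on, the imaginary part of the Laplacian spectrum, only appears for directed (asymmetric) graphs. The sketch below computes it with NumPy and applies a hypothetical greedy pruning rule, dropping the edge whose removal perturbs that spectral quantity least; this is an illustration of the ingredients, not the paper's actual criterion:

```python
import numpy as np

def directed_laplacian(A):
    # Out-degree Laplacian of a directed graph: L = D_out - A.
    return np.diag(A.sum(axis=1)) - A

def imag_spectral_mass(A):
    # Total magnitude of the imaginary parts of the Laplacian eigenvalues;
    # nonzero only when the adjacency matrix is asymmetric (directed).
    return np.abs(np.linalg.eigvals(directed_laplacian(A)).imag).sum()

def prune_least_impactful_edge(A):
    # Hypothetical greedy rule: remove the edge whose deletion changes the
    # imaginary spectral mass the least.
    base = imag_spectral_mass(A)
    best, best_delta = None, np.inf
    for i, j in zip(*np.nonzero(A)):
        B = A.copy()
        B[i, j] = 0.0
        delta = abs(imag_spectral_mass(B) - base)
        if delta < best_delta:
            best, best_delta = (int(i), int(j)), delta
    return best

# A directed 3-cycle: its Laplacian eigenvalues are complex.
A = np.array([[0.0, 1.0, 0.0],
              [0.0, 0.0, 1.0],
              [1.0, 0.0, 0.0]])
edge = prune_least_impactful_edge(A)
```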


Neural Cellular Automata for Decentralized Sensing using a Soft Inductive Sensor Array for Distributed Manipulator Systems

arXiv.org Artificial Intelligence

In Distributed Manipulator Systems (DMS), decentralization is a highly desirable property, as it promotes robustness and facilitates scalability by distributing the computational burden and eliminating single points of failure. However, current DMS typically rely on a centralized approach to sensing, such as single-camera computer vision systems. This centralization poses a risk to system reliability and significantly limits system size. In this work, we introduce a decentralized approach to sensing in Distributed Manipulator Systems using Neural Cellular Automata (NCA). To demonstrate decentralized sensing in a hardware implementation, we present a novel inductive sensor board designed for distributed sensing and evaluate its ability to estimate global object properties, such as the geometric center, through local interactions and computations. Experiments demonstrate that NCA-based sensing networks estimate object position to within 0.24 times the inter-sensor distance. They maintain resilience under sensor faults and noise, and scale seamlessly across varying network sizes. These findings underscore the potential of local, decentralized computations to enable scalable, fault-tolerant, and noise-resilient object property estimation in DMS.
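As a rough illustration of how purely local update rules can recover a global quantity like the geometric center, here is a classical Laplacian-consensus sketch in NumPy. This is not the learned NCA from the paper; the grid size, occupancy pattern, and step size are all invented. Each cell only exchanges values with its four grid neighbours, yet every cell converges to the global weighted mean:

```python
import numpy as np

def consensus_step(X, eps=0.2):
    # Laplacian consensus: each cell nudges its value toward its 4-neighbours.
    # The update conserves the grid sum, so all cells converge to the mean
    # using only local communication.
    P = np.pad(X, 1)
    nbr = P[:-2, 1:-1] + P[2:, 1:-1] + P[1:-1, :-2] + P[1:-1, 2:]
    ones = np.pad(np.ones_like(X), 1)
    deg = ones[:-2, 1:-1] + ones[2:, 1:-1] + ones[1:-1, :-2] + ones[1:-1, 2:]
    return X + eps * (nbr - deg * X)

# Toy 6x6 sensor grid: occ[i, j] is the locally sensed object mass.
n = 6
ii, jj = np.meshgrid(np.arange(n), np.arange(n), indexing="ij")
occ = np.zeros((n, n))
occ[1:3, 2:5] = 1.0          # an object covering a few cells

# Run consensus on three fields; afterwards ANY cell can read out the
# geometric center as (mean(occ*i) / mean(occ), mean(occ*j) / mean(occ)).
fields = [occ * ii, occ * jj, occ]
for _ in range(800):
    fields = [consensus_step(f) for f in fields]
center = (fields[0][0, 0] / fields[2][0, 0],
          fields[1][0, 0] / fields[2][0, 0])
```

The readout at the corner cell (0, 0) matches the true center of the occupied block, which is the kind of global-from-local estimation the paper's NCA learns.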


Climate Adaptation with Reinforcement Learning: Experiments with Flooding and Transportation in Copenhagen

arXiv.org Artificial Intelligence

Due to climate change, the frequency and intensity of extreme rainfall events, which contribute to urban flooding, are expected to increase in many places. These floods can damage transport infrastructure and disrupt mobility, highlighting the need for cities to adapt to escalating risks. Reinforcement learning (RL) serves as a powerful tool for uncovering optimal adaptation strategies, determining how and where to deploy adaptation measures effectively, even under significant uncertainty. In this study, we leverage RL to identify the most effective timing and locations for implementing measures, aiming to reduce both direct and indirect impacts of flooding. Our framework integrates climate change projections of future rainfall events and floods, models city-wide motorized trips, and quantifies direct and indirect impacts on infrastructure and mobility. Preliminary results suggest that our RL-based approach can significantly enhance decision-making by prioritizing interventions in specific urban areas and identifying the optimal periods for their implementation. Our framework is publicly available: https://github.com/
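To illustrate the kind of when-and-where decision problem RL solves here, below is a deliberately tiny tabular Q-learning sketch. The environment, costs, and damage values are invented and far simpler than the paper's framework: over T periods, an agent chooses when to protect two flood-prone areas, trading a one-off protection cost against recurring expected flood damage:

```python
import numpy as np

# Hypothetical toy MDP: each unprotected area incurs expected damage 0.5 per
# period; installing protection costs 0.3 once. Actions: 0 = wait,
# 1 = protect area A, 2 = protect area B.
T, COST, DAMAGE = 6, 0.3, 0.5

def step(t, prot, action):
    prot = list(prot)
    reward = 0.0
    if action > 0 and not prot[action - 1]:
        prot[action - 1] = True
        reward -= COST
    reward -= DAMAGE * (2 - sum(prot))   # damage in unprotected areas
    return t + 1, tuple(prot), reward

def s_idx(t, prot):
    # Flatten (time, protection status) into a single state index.
    return t * 4 + prot[0] * 2 + prot[1]

rng = np.random.default_rng(0)
Q = np.zeros(((T + 1) * 4, 3))
for _ in range(4000):                     # epsilon-greedy Q-learning
    t, prot = 0, (False, False)
    while t < T:
        s = s_idx(t, prot)
        a = int(rng.integers(3)) if rng.random() < 0.2 else int(Q[s].argmax())
        t2, prot2, r = step(t, prot, a)
        target = r + (Q[s_idx(t2, prot2)].max() if t2 < T else 0.0)
        Q[s, a] += 0.2 * (target - Q[s, a])
        t, prot = t2, prot2

best_first_action = int(Q[s_idx(0, (False, False))].argmax())
```

With these numbers, expected damage over the horizon far exceeds the protection cost, so the learned policy protects an area immediately rather than waiting; the paper's framework answers the analogous question at city scale under climate projections.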


Design Optimization of NOMA Aided Multi-STAR-RIS for Indoor Environments: A Convex Approximation Imitated Reinforcement Learning Approach

arXiv.org Artificial Intelligence

Sixth-generation (6G) networks leverage simultaneously transmitting and reflecting reconfigurable intelligent surfaces (STAR-RISs) to overcome the limitations of traditional RISs. However, deploying STAR-RISs indoors presents challenges in interference mitigation, power consumption, and real-time configuration. In this work, a novel network architecture utilizing multiple access points (APs) and STAR-RISs is proposed for indoor communication. An optimization problem encompassing user assignment, access point beamforming, and STAR-RIS phase control for reflection and transmission is formulated. The inherent complexity of the formulated problem necessitates a decomposition approach for an efficient solution. This involves tackling different sub-problems with specialized techniques: a many-to-one matching algorithm is employed to assign users to appropriate access points, optimizing resource allocation. To facilitate efficient resource management, access points are grouped using a correlation-based K-means clustering algorithm. Multi-agent deep reinforcement learning (MADRL) is leveraged to optimize the control of the STAR-RIS. Additionally, the proposed MADRL approach incorporates convex approximation (CA). Yu Min Park, Sheikh Salman Hassan, Eui-Nam Huh, and Choong Seon Hong are with the Department of Computer Science and Engineering, Kyung Hee University, Yongin-si, Gyeonggi-do 17104, Rep. of Korea, e-mails: {yumin0906, salman0335, johnhuh, cshong}@khu.ac.kr. Yan Kyaw Tun is with the Department of Electronic Systems, Aalborg University, A. C. Meyers Vænge 15, 2450 København, e-mail: ykt@es.aau.dk. Walid Saad is with the Bradley Department of Electrical and Computer Engineering, Virginia Tech, VA, 24061, USA.
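The correlation-based K-means grouping of access points can be sketched in a few lines: cluster APs using their rows of an AP-by-AP correlation matrix as features. The implementation below is a minimal Lloyd's k-means with farthest-point initialization, and the channel data, group structure, and noise level are invented stand-ins for the paper's measurements:

```python
import numpy as np

def kmeans(X, k, iters=50):
    # Minimal Lloyd's k-means with farthest-point initialization.
    centers = [X[0]]
    for _ in range(k - 1):
        d = ((X[:, None] - np.array(centers)[None]) ** 2).sum(-1).min(axis=1)
        centers.append(X[d.argmax()])
    centers = np.array(centers, dtype=float)
    for _ in range(iters):
        labels = ((X[:, None] - centers[None]) ** 2).sum(-1).argmin(axis=1)
        for c in range(k):
            if (labels == c).any():
                centers[c] = X[labels == c].mean(axis=0)
    return labels

# Toy channel measurements for 8 access points: two groups of four APs
# observe correlated signals (a shared base signal plus noise).
rng = np.random.default_rng(1)
base_a, base_b = rng.normal(size=16), rng.normal(size=16)
H = np.vstack([base_a + 0.1 * rng.normal(size=(4, 16)),
               base_b + 0.1 * rng.normal(size=(4, 16))])
C = np.corrcoef(H)        # correlation profiles used as clustering features
labels = kmeans(C, 2)
```

Grouping by correlation rather than raw measurements means APs that see statistically similar channels are managed together, which is the resource-management motivation given in the abstract.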